Deep learning has been widely used for protein engineering. However, it is limited by the lack of sufficient experimental data to train an accurate model for predicting the functional fitness of high-order mutants. Here, we develop SESNet, a supervised deep-learning model that predicts the fitness of protein mutants by leveraging both sequence and structure information and exploiting an attention mechanism. Our model integrates the local evolutionary context from homologous sequences, the global evolutionary context encoding rich semantics from the universal protein sequence space, and structure information accounting for the microenvironment around each residue in a protein. We show that SESNet outperforms state-of-the-art models for predicting the sequence-function relationship on 26 deep mutational scanning datasets. More importantly, we propose a data augmentation strategy that pre-trains our model on data generated by unsupervised models. After that, our model achieves strikingly high accuracy in predicting the fitness of protein mutants, especially for higher-order variants (>4 mutation sites), when fine-tuned with only a small number of experimental mutation data points (<50). The proposed strategy is of great practical value, as the required experimental effort, i.e., producing a few tens of experimental mutation measurements for a given protein, is generally affordable for an ordinary biochemistry group and can be applied to almost any protein.
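The fusion described above (per-residue local evolutionary features, global protein-language-model embeddings, and structural microenvironment features combined through attention) can be pictured with a minimal PyTorch sketch. The module, feature dimensions, and pooling below are illustrative assumptions, not the authors' actual SESNet architecture.

```python
import torch
import torch.nn as nn

class FitnessRegressor(nn.Module):
    # hypothetical fusion module: not the authors' SESNet implementation
    def __init__(self, d_local=64, d_global=1280, d_struct=16, d_model=256, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(d_local + d_global + d_struct, d_model)
        self.attn = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, local_feat, global_feat, struct_feat):
        # each input: (batch, seq_len, d_*) per-residue features
        x = torch.cat([local_feat, global_feat, struct_feat], dim=-1)
        x = self.attn(self.proj(x))        # residue-level attention over the fused features
        return self.head(x.mean(dim=1))    # pool over residues, regress a fitness score

model = FitnessRegressor()
score = model(torch.randn(2, 120, 64), torch.randn(2, 120, 1280), torch.randn(2, 120, 16))
print(score.shape)  # torch.Size([2, 1])
```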
The paradigm of machine intelligence is shifting from purely supervised learning to a more practical scenario in which plenty of loosely related unlabeled data are available and labeled data are scarce. Most existing algorithms assume that the underlying task distribution is stationary. Here, we consider a more realistic and challenging setting in which the task distribution evolves over time. We name this problem semi-supervised meta-learning with evolving task distributions, abbreviated as SETS. Two key challenges arise in this more realistic setting: (i) how to use unlabeled data in the presence of a large amount of out-of-distribution (OOD) unlabeled data; and (ii) how to prevent catastrophic forgetting of previously learned task distributions due to the task distribution shift. We propose an OOD-robust and knowledge-preserved semi-supervised meta-learning approach (ORDER) to tackle these two major challenges. Specifically, ORDER introduces a novel mutual-information regularization to robustify the model with unlabeled OOD data and adopts an optimal-transport regularization to remember previously learned knowledge in the feature space. In addition, we test our method on a very challenging dataset: SETS on large-scale non-stationary semi-supervised task distributions consisting of (at least) 72K tasks. With extensive experiments, we demonstrate that the proposed ORDER alleviates forgetting on evolving task distributions and is more robust to OOD data than related strong baselines.
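A schematic sketch of the two regularizers named above may help: a consistency term on two views of unlabeled data stands in for the mutual-information regularization, and an entropic optimal-transport (Sinkhorn) term pulls current features toward features stored from earlier task distributions. The function names, loss weights, and the consistency proxy are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def sinkhorn_cost(x, y, eps=0.1, n_iter=50):
    # entropic optimal transport between two equally weighted feature batches
    C = torch.cdist(x, y) ** 2
    a = torch.full((x.size(0),), 1.0 / x.size(0))
    b = torch.full((y.size(0),), 1.0 / y.size(0))
    K = torch.exp(-C / eps)
    u = torch.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.t() @ u + 1e-9)
        u = a / (K @ v + 1e-9)
    P = u[:, None] * K * v[None, :]
    return (P * C).sum()

def total_loss(logits_lab, y_lab, logits_unlab_a, logits_unlab_b,
               feats_now, feats_old, lam_mi=0.5, lam_ot=0.1):
    sup = F.cross_entropy(logits_lab, y_lab)
    # consistency between two views of (possibly OOD) unlabeled data,
    # used here as a crude proxy for the mutual-information regularizer
    consistency = F.kl_div(F.log_softmax(logits_unlab_a, dim=-1),
                           F.softmax(logits_unlab_b, dim=-1), reduction="batchmean")
    memory_term = sinkhorn_cost(feats_now, feats_old)   # stay close to old features
    return sup + lam_mi * consistency + lam_ot * memory_term
```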
Task-free continual learning (CL) aims to learn from non-stationary data streams without explicit task definitions and without forgetting previous knowledge. The widely adopted memory-replay approach can gradually become less effective on long data streams, because the model may memorize the stored examples and overfit the memory buffer. Second, existing methods overlook the high uncertainty in the memory data distribution, since there is a big gap between the distribution of the memory data and the distribution of all previously seen examples. To address these problems, we propose, for the first time, a principled memory evolution framework that dynamically evolves the memory data distribution by making the memory buffer gradually harder to memorize, using distributionally robust optimization (DRO). We then derive a family of methods that evolve the memory buffer data in the continuous probability space via Wasserstein gradient flow (WGF). The proposed DRO is w.r.t. the worst-case evolved memory data distribution, thus guaranteeing model performance and learning significantly more robust features than existing memory-replay-based methods. Extensive experiments on existing benchmarks demonstrate the effectiveness of the proposed method in alleviating forgetting. As a by-product of the proposed framework, our method is also more robust to adversarial examples than existing task-free CL methods.
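A toy sketch of the core mechanism: periodically evolve the stored memory examples in the direction that increases the model's loss, making the buffer gradually harder to memorize, which is the spirit of the DRO and gradient-flow updates described above. The plain gradient-ascent update and step sizes are illustrative simplifications, not the paper's WGF solver.

```python
import torch
import torch.nn.functional as F

def evolve_memory(model, mem_x, mem_y, step=0.01, n_steps=5):
    # make stored examples gradually harder to fit by ascending the model's loss
    x = mem_x.clone().detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), mem_y)
        grad, = torch.autograd.grad(loss, x)
        x = (x + step * grad).detach()     # one ascent step on the memory samples
    return x
```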
With the development of machine learning and data science, data sharing between companies and research institutes has become common as a way to avoid data scarcity. However, sharing original datasets that contain private information can cause privacy leakage. A reliable solution is to use private synthetic datasets that preserve the statistical information of the original datasets. In this paper, we propose MC-GEN, a privacy-preserving synthetic data generation method under a differential privacy guarantee for machine learning classification tasks. MC-GEN applies multi-level clustering and a differentially private generative model to improve the utility of the synthetic data. In the experimental evaluation, we studied the effects of the parameters and the effectiveness of MC-GEN. The results show that MC-GEN achieves significant effectiveness under given privacy guarantees on multiple classification tasks. Moreover, we compare MC-GEN with three existing methods; the results show that MC-GEN outperforms them in terms of utility.
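A simplified numpy/scikit-learn sketch of the two ingredients named above, clustering followed by a noise-perturbed per-cluster Gaussian generator, is given below. The function name and the Laplace noise calibration are assumptions for illustration; this is not a faithful differential-privacy accounting of MC-GEN.

```python
import numpy as np
from sklearn.cluster import KMeans

def synthesize(X, y, n_clusters=5, noise_scale=0.1, seed=0):
    # hypothetical helper: cluster each class, fit a noisy Gaussian per cluster,
    # then sample synthetic points from it
    rng = np.random.default_rng(seed)
    synth_X, synth_y = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        k = min(n_clusters, len(Xc))
        clusters = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(Xc)
        for c in range(k):
            pts = Xc[clusters == c]
            if len(pts) < 2:
                continue
            mu = pts.mean(axis=0) + rng.laplace(0.0, noise_scale, size=pts.shape[1])
            cov = np.cov(pts, rowvar=False) + noise_scale * np.eye(pts.shape[1])
            synth_X.append(rng.multivariate_normal(mu, cov, size=len(pts)))
            synth_y.append(np.full(len(pts), label))
    return np.vstack(synth_X), np.concatenate(synth_y)
```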
Standard federated optimization methods successfully apply to stochastic problems with single-level structure. However, many contemporary ML problems (including adversarial robustness, hyperparameter tuning, and actor-critic) fall under nested bilevel programming, which subsumes minimax and compositional optimization. In this work, we propose FedNest: a federated alternating stochastic gradient method to address general nested problems. We establish provable convergence rates for FedNest in the presence of heterogeneous data and introduce variations for bilevel, minimax, and compositional optimization. FedNest introduces multiple innovations, including federated hypergradient computation and variance reduction, to address inner-level heterogeneity. We complement our theory with experiments on hyperparameter & hyper-representation learning and minimax optimization that demonstrate the benefits of our method in practice. Code is available at https://github.com/ucr-optml/fednest.
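The alternating, federated structure can be illustrated on the minimax special case mentioned above: each client runs local ascent steps on the inner variable, the server averages them, and an outer descent step follows. This numpy sketch is an illustration of the alternating scheme on a toy strongly-convex-strongly-concave problem, not FedNest's hypergradient-based algorithm.

```python
# Toy federated minimax: f_i(x, y) = x^T A_i y + 0.5||x||^2 - 0.5||y||^2 - b_i^T y
import numpy as np

def federated_minimax(clients, x, y, rounds=100, inner_steps=5, lr=0.05):
    for _ in range(rounds):
        local_ys = []
        for A, b in clients:
            y_i = y.copy()
            for _ in range(inner_steps):
                y_i += lr * (A.T @ x - y_i - b)      # local ascent on the inner (max) variable
            local_ys.append(y_i)
        y = np.mean(local_ys, axis=0)                # server aggregates the inner variable
        outer_grads = [A @ y + x for A, _ in clients]
        x = x - lr * np.mean(outer_grads, axis=0)    # outer descent step on the min variable
    return x, y

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(3, 3)), rng.normal(size=3)) for _ in range(4)]
x_opt, y_opt = federated_minimax(clients, np.zeros(3), np.zeros(3))
print(x_opt, y_opt)
```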
Due to the incompleteness of knowledge graphs (KGs), zero-shot link prediction (ZSLP), which aims to predict unobserved relations in KGs, has attracted recent interest from researchers. A common solution is to use textual features of relations (e.g., surface names or textual descriptions) as auxiliary information to bridge the gap between seen and unseen relations. Current approaches learn an embedding for each word token in the text. These methods lack robustness, since they suffer from the out-of-vocabulary (OOV) problem. Meanwhile, models built on character n-grams have the capability of generating expressive representations for OOV words. Thus, in this paper, we propose a Hierarchical N-gram framework for Zero-Shot Link Prediction (HNZSLP), which considers the dependencies among the character n-grams of a relation for ZSLP. Our approach works by first constructing a hierarchical n-gram graph over the surface name to model the organizational structure of the n-grams that make up the surface name. A Transformer-based GramTransformer is then presented to model the hierarchical n-gram graph and construct relation embeddings for ZSLP. Experimental results show that the proposed HNZSLP achieves state-of-the-art performance on two ZSLP datasets.
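The first step described above, building a hierarchical graph over the character n-grams of a relation's surface name, can be sketched in a few lines. The edge-list format and the rule linking each n-gram to its two (n-1)-gram constituents are illustrative assumptions about how such a hierarchy might be encoded before being fed to a graph model like the GramTransformer.

```python
def ngram_hierarchy(surface_name, max_n=4):
    # nodes: character n-grams; edges: each n-gram links to its two (n-1)-gram parts
    text = surface_name.replace(" ", "_")
    nodes, edges = set(), set()
    for n in range(1, max_n + 1):
        for i in range(len(text) - n + 1):
            gram = text[i:i + n]
            nodes.add(gram)
            if n > 1:
                edges.add((gram, gram[:-1]))
                edges.add((gram, gram[1:]))
    return nodes, edges

nodes, edges = ngram_hierarchy("birth place", max_n=3)
print(len(nodes), "n-gram nodes;", len(edges), "hierarchical edges")
```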
Constructing a query graph from a natural language question is an important step in answering complex questions over knowledge graphs (Complex KGQA). In general, a question can be answered correctly if its query graph is built correctly; the correct answer is then retrieved by issuing the query graph against the KG. Therefore, this paper focuses on query graph generation from natural language questions. Existing approaches to query graph generation ignore the semantic structure of a question, resulting in a large number of noisy query graph candidates that undermine prediction accuracy. In this paper, we define six semantic structures from common questions in KGQA and develop a novel model to predict the semantic structure of a question. By doing so, we can first filter out noisy candidate query graphs and then rank the remaining candidates with a BERT-based ranking model. Extensive experiments on two popular benchmarks, MetaQA and WebQuestionsSP (WSP), demonstrate the effectiveness of our method compared to the state of the art.
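The filter-then-rank pipeline described above can be summarized as a small skeleton: predict the question's semantic structure, discard candidate query graphs with a different structure, and rank the survivors. The classifier and scorer below are trivial stand-ins for the paper's structure predictor and BERT-based ranker; all names are hypothetical.

```python
def predict_structure(question):
    # stand-in: a real system would use a trained classifier over the six structures
    return "chain-2" if " of " in question else "chain-1"

def score(question, graph):
    # stand-in for a BERT-based ranker: token overlap between question and graph relations
    q = set(question.lower().split())
    return len(q & set(t.lower() for rel in graph["relations"] for t in rel.split()))

def generate_query_graph(question, candidates):
    structure = predict_structure(question)                       # step 1: structure prediction
    filtered = [g for g in candidates if g["structure"] == structure]  # step 2: filter
    return max(filtered, key=lambda g: score(question, g)) if filtered else None  # step 3: rank

candidates = [
    {"structure": "chain-1", "relations": ["directed_by"]},
    {"structure": "chain-2", "relations": ["directed_by", "place of birth"]},
]
print(generate_query_graph("what is the place of birth of the director", candidates))
```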
In continual learning (CL), the goal is to design models that can learn a sequence of tasks without catastrophic forgetting. While there is a rich set of techniques for CL, relatively little is understood about how representations built by previous tasks benefit new tasks that are added to the network. To address this, we study the problem of continual representation learning (CRL), where we learn an evolving representation as new tasks arrive. Focusing on zero-forgetting methods where tasks are embedded in subnetworks (e.g., PackNet), we first provide experiments demonstrating that CRL can significantly boost sample efficiency when learning new tasks. To explain this, we establish theoretical guarantees for CRL by providing sample complexity and generalization error bounds for new tasks, formalizing the statistical benefits of previously-learned representations. Our analysis and experiments also highlight the importance of the order in which tasks are learned. Specifically, we show that CL benefits if the initial tasks have large sample size and high "representation diversity". Diversity ensures that adding new tasks incurs small representation mismatch and that they can be learned with few samples while training only a few additional nonzero weights. Finally, we ask whether each task subnetwork can be made efficient at inference time while retaining the benefits of representation learning. To this end, we propose an inference-efficient variation of PackNet called Efficient Sparse PackNet (ESPN), which employs joint channel & weight pruning. ESPN embeds tasks in channel-sparse subnets that require up to 80% fewer FLOPs to compute while approximately retaining accuracy, and it is very competitive with a variety of baselines. In summary, this work takes a step towards data- and compute-efficient CL from a representation learning perspective. GitHub page: https://github.com/ucr-optml/CtRL
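The subnetwork idea behind PackNet/ESPN, where each task owns a binary mask over shared weights and only masked-in weights are used for that task, can be sketched as follows. The magnitude-based mask selection is a simplified stand-in for the paper's joint channel-and-weight pruning procedure, and the class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.task_masks = {}                       # task_id -> binary weight mask

    def prune_for_task(self, task_id, keep_ratio=0.2):
        # keep only the largest-magnitude weights for this task (simplified pruning)
        w = self.linear.weight.detach().abs()
        threshold = torch.quantile(w.flatten(), 1 - keep_ratio)
        self.task_masks[task_id] = (w >= threshold).float()

    def forward(self, x, task_id):
        mask = self.task_masks[task_id]
        return nn.functional.linear(x, self.linear.weight * mask, self.linear.bias)

layer = MaskedLinear(16, 8)
layer.prune_for_task(task_id=0, keep_ratio=0.2)
out = layer(torch.randn(4, 16), task_id=0)
print(out.shape)  # torch.Size([4, 8])
```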
Imbalanced datasets are commonplace in modern machine learning problems. The presence of under-represented classes or groups with sensitive attributes leads to concerns about generalization and fairness. Such concerns are further exacerbated by the fact that large-capacity deep networks can perfectly fit the training data, seemingly achieving perfect accuracy and fairness during training while performing poorly at test time. To address these challenges, we propose AutoBalance, a bilevel optimization framework that automatically designs a training loss function to optimize a blend of accuracy and fairness-seeking objectives. Specifically, the lower-level problem trains the model weights, and the upper-level problem tunes the loss function by monitoring and optimizing the desired objective on validation data. Our loss design enables personalized treatment of classes/groups by employing a parametric cross-entropy loss and individualized data augmentation schemes. We evaluate the benefits and performance of our approach on application scenarios with imbalanced and group-sensitive classification. Extensive empirical evaluations demonstrate the benefits of AutoBalance over state-of-the-art approaches. Our experimental results are complemented by theoretical insights on loss function design and the benefits of the train-validation split. All code is available open-source.
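The "parametric cross-entropy" ingredient can be pictured as per-class multiplicative and additive logit adjustments that the upper-level (validation) problem tunes while the lower-level problem trains the model weights. The exact parameterization and the bilevel update schedule below are simplified assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametricCrossEntropy(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_classes))   # per-class multiplicative term
        self.iota = nn.Parameter(torch.zeros(num_classes))   # per-class additive term

    def forward(self, logits, targets):
        adjusted = self.gamma * logits + self.iota            # class-wise logit adjustment
        return F.cross_entropy(adjusted, targets)

# inner step: update model weights with the current loss parameters;
# outer step: update (gamma, iota) to reduce a balanced objective on validation data.
loss_fn = ParametricCrossEntropy(num_classes=10)
logits, targets = torch.randn(32, 10), torch.randint(0, 10, (32,))
print(loss_fn(logits, targets).item())
```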
Exemplar-free class-incremental learning (CIL) is a challenging problem because rehearsal of data from previous phases is strictly prohibited, causing catastrophic forgetting in deep neural networks (DNNs). In this paper, we present iVoro, a holistic framework for CIL derived from computational geometry. We find that the Voronoi Diagram (VD), a classical model for space subdivision, is especially powerful for solving the CIL problem, because a VD can itself be constructed incrementally: newly added sites (classes) only affect nearby classes, leaving distant classes nearly unforgotten. Furthermore, to find better centers for VD construction, we bridge DNNs with VDs using the power diagram and show that the VD structure can be optimized by integrating local DNN models with a divide-and-conquer algorithm. Moreover, our VD construction is not restricted to the deep feature space, but is also applicable to multiple intermediate feature spaces, generalizing the VD to a multi-centered VD (CIVD) that effectively captures multi-layer features from the DNN. Importantly, iVoro is also capable of uncertainty-aware test-time Voronoi cell assignment and exhibits a high correlation between geometric uncertainty and predictive accuracy (up to ~0.9). Putting everything together, iVoro achieves up to 25.26%, 37.09%, and 33.21% improvements on CIFAR-100, TinyImageNet, and ImageNet-Subset, respectively, compared to state-of-the-art non-exemplar CIL approaches. In conclusion, iVoro enables highly accurate, privacy-preserving, and geometrically interpretable CIL, which is particularly useful when cross-phase data sharing is forbidden, e.g., in medical applications. Our code is available at https://machunwei.github.io/ivoro.
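The geometric core can be sketched with plain numpy: classes are represented by centers, adding a class only inserts a new site (so existing cells change only locally), and prediction assigns a point to the nearest center under a power-diagram distance. Deep features, CIVD, and the uncertainty handling are omitted; the class name, weights, and toy data are illustrative assumptions.

```python
import numpy as np

class VoronoiClassifier:
    def __init__(self):
        self.centers, self.weights, self.labels = [], [], []

    def add_class(self, feats, label, weight=0.0):
        # adding a class only inserts one new site; existing sites are untouched
        self.centers.append(feats.mean(axis=0))
        self.weights.append(weight)
        self.labels.append(label)

    def predict(self, x):
        c = np.stack(self.centers)
        w = np.array(self.weights)
        power_dist = ((x[None, :] - c) ** 2).sum(axis=1) - w   # power-diagram distance
        return self.labels[int(np.argmin(power_dist))]

clf = VoronoiClassifier()
clf.add_class(np.random.randn(20, 8) + 2.0, label="cat")
clf.add_class(np.random.randn(20, 8) - 2.0, label="dog")   # later phase: a new class
print(clf.predict(np.full(8, 1.8)))
```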